Path: theory.lcs.mit.edu!wald
From: wald@theory.lcs.mit.edu (David Wald)
Newsgroups: comp.std.c
Subject: sizeof(char) ~= sizeof(float)
Date: 24 Feb 1996 18:15:32 GMT
Organization: Theory of Computation, LCS, MIT
Message-ID: <WALD.96Feb24131532@woodpecker.lcs.mit.edu>
NNTP-Posting-Host: woodpecker.lcs.mit.edu

During a recent recurrence of the perennial "how big is an int"
discussion on comp.lang.c.moderated, I described a freestanding C
implementation I'd dealt with in which, as in some other word-based
implementations, sizeof(char) == sizeof(long) == sizeof(float).
However, one quirk of this architecture was that not all
interpretations of a memory word could see all bits. In particular,
though integral types and floats each took one word, a float used 8
more bits than the integral types. When a word was used in integral
operations, the lower 8 bits were invisible and harmless. Thus you
could have two sections of memory which could be distinguished when
viewed through float*'s but not when viewed through char*'s.
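
To make the quirk concrete, here is a rough sketch (written for an
ordinary hosted implementation, with made-up names and values) of what
I mean by viewing memory "through char*'s": the byte-wise loop below
sees only the bits that character-type accesses see, so on the machine
I'm describing it could call two unequal floats identical.

/* Illustrative sketch only: compare two float objects both directly
 * and byte-wise through unsigned char *.  On an ordinary machine the
 * two comparisons agree; on the word-based implementation described
 * above, the byte-wise loop cannot see the extra 8 bits a float uses,
 * so it could report "same" for floats that compare unequal. */
#include <stdio.h>

static int same_bytes(const float *a, const float *b)
{
    const unsigned char *pa = (const unsigned char *)a;
    const unsigned char *pb = (const unsigned char *)b;
    size_t i;

    for (i = 0; i < sizeof(float); i++)
        if (pa[i] != pb[i])
            return 0;
    return 1;
}

int main(void)
{
    float x = 1.0f;
    float y = 2.0f;          /* any two distinct values will do */

    printf("as floats: %s\n", (x == y) ? "same" : "different");
    printf("as chars:  %s\n", same_bytes(&x, &y) ? "same" : "different");
    return 0;
}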
Can such an implementation be ANSI-conformant? The basic argument
against, presented by Tanmoy Bhattacharya, is that the memory-compare,
and possibly memory-copy, functions can't be implemented in a portable
fashion if two dissimilar regions of memory can't be distinguished by
comparing them char-wise. (I'm unsure about the memory copy, since
I've forgotten whether the implementation actually preserved the
hidden 8 bits when you copied a char.) Does this violate a constraint
of the standard? At Tanmoy's suggestion, I'm putting the question
before the collected wisdom of comp.std.c.
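
For concreteness, the strictly portable memory compare Tanmoy's
argument has in mind might look something like the sketch below (my
illustration, not his code; the name compare_bytes is made up).  Since
it may only look at memory through unsigned char *, it is blind to
exactly the bits that distinguish the two float regions.

#include <stddef.h>

/* Portable-C sketch of a memory compare: it inspects only the bytes
 * visible through unsigned char *.  On the implementation described
 * above, two float regions differing only in the hidden 8 bits would
 * compare equal here, which is the heart of the conformance question. */
int compare_bytes(const void *s1, const void *s2, size_t n)
{
    const unsigned char *p1 = s1;
    const unsigned char *p2 = s2;

    while (n--) {
        if (*p1 != *p2)
            return (*p1 < *p2) ? -1 : 1;
        p1++;
        p2++;
    }
    return 0;
}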
-David
--
============================================================================
David Wald http://theory.lcs.mit.edu/~wald/ wald@theory.lcs.mit.edu
============================================================================